@style(spacing 2)
@center[JMC AAAI Presidential Speech 8/9/84]

	There actually are expert system stories and I will take a little
bit of my time by telling three of them.  The first one, a little bit of a
political expert system story, was told to me by an American labor union
leader about France.  He said the French had a great expert system that
could predict the future, and the present president of France asked it
"what will be the rate of unemployment when my term is up in 1988?"  The
answer was "0%".  Pleased with that, he asked "what will be the rate of
inflation?"  "0%."  "What will be the price of a loaf of bread?"  "2
rubles."  Present when that story was told was an émigré from the Soviet
Union, who said ``yes, the Russians had such an expert system, and
Brezhnev was brought to it and he asked it how long until the Soviet Union
achieves communism?'' and the computer giggled for a while and came back
with the answer ``13 miles'', and so there was great consternation, and
finally it was explained ``comrade Brezhnev, as you know, each Party
Congress is a step toward communism.''  The third one was told by an
American general, and he said they had this expert system and came the
evil day and they asked it ``are the missiles coming across the ocean or
over the pole?'' and the machine's lights flashed for a moment or two and
the machine said ``yes''.  They typed back in ``yes, what?'' and the
answer came back after another minute ``yes sir''.  So you see, there are
expert system stories.

	So I am going to talk about what is common sense, and my talk is in
the main addressed to researchers in the field rather than attempting to
make a general summary of the state of AI for the public or for the
journalists or something like that.  The reason for this apology is that I
am going to concentrate mainly on things that are unsolved rather than
things that have been done.  So I don't want to sound discouraged or
something.  I have a habit sometimes of giving myself a fairly grandiose
title without knowing quite what I am going to say and then doing my best
to live up to it, and sometimes it is more successful than at other times.
I think this time the success will be judged to be quite moderate.  But,
anyway, common sense as a topic has had its ups and downs in AI but now
seems to be on an upswing; in particular, especially in Northern
California, I want to mention the Common Sense Summer that Jerry Hobbs
talked about, which SRI and CSLI at Stanford are doing jointly.

	We have the question of what is common sense, and as you see, the
answer is at the moment slightly evasive.  Namely, it does not really
answer precisely, but in any case I am going to break it down and discuss
these three topics.  Now, I want to sort of pose a problem.  Common sense
is intended for successful behavior in a complex world, and here we have a
picture of a world in which the trash cans are not standing up as they
ought to be, and to add an additional piece of information I will say that
this has been observed to be a recurrent phenomenon.  So the question is
what to make of that, and I'll come back later to the various things which
are involved in understanding what is the matter and what to do about it.
Well, maybe I'll say a little bit.  We are going to infer that the dogs
are responsible for this, and what to do about it turned out not to be a
very easy problem for me to solve.  It took me some weeks to decide what
to do.

	Now, Ed Feigenbaum asked the question about common sense.  There is
some idea that maybe giving computers common sense is a task which is
qualitatively similar to the current expert systems, and that it is just a
question of a very large number of pattern-action rules of the
conventional kind; and as I see it, it isn't so, except that there is a
way out by cheating, since pattern-action rules are universal: if you like
some other formalism, you can make pattern-action rules that will
interpret this other formalism.  Well, we don't want to let them out that
way, because you still have to do the other formalism; but nevertheless I
had to qualify my 'no' answer so that it would be correct, and of course
the question is 'what is it'.

	Now, as Neil said, if we really understood how to represent common
sense knowledge in a general way, then we could have this common sense
database, and as you can see, there are differences of opinion about how
large it ought to be, or would have to be, to be useful.  And my opinion
is that it wouldn't have to be so terribly large.  Now, as Neil said, I
have been interested in this problem for a long time, and in 1958 I wrote
a paper called "Programs with Common Sense" which proposed to do what's
described on the slide: to represent these things as sentences of
mathematical logic, the general facts about the effects of events and
actions especially, the goals to be achieved, a general principle that one
should do what achieves one's goals, and then have various ways of getting
the facts of a particular situation into the computer.  For example, the
observations of the television camera might result in sentences, and so
forth.  Then, we could, in principle, reduce the problem to mathematical
logic; namely, we try to deduce a sentence of the form 'should(action)'
and then we do that action.  And so the original idea was to actually
write a program that would work in that way.  Actually, even in the 1958
paper, this proposal was qualified by the idea that, of course, you might
have all sorts of special purpose programs which could do some of the work
whenever that was efficient and which would then be controlled by the
logic.  Now, in Allen Newell's 1980 lecture, on the logical equivalent of
this podium, he talked about the logic level and made the point that
whether the program actually operated by manipulating logical formulas or
not, one could think about it on the logic level, as he called it, and one
could discuss what the program believed and what inferences it was capable
of drawing.  The programs with common sense idea, the name that was used
at the time was `advice taker', was not something that was successfully
put into operation, and it wasn't very easy to figure out why, that is,
what was wrong with it, but like all AI systems it turned out to be
extremely brittle; everything that you did turned out to be extremely
specialized.
One thing that turned out to be true was that ordinary logical deduction
has to be supplemented by some form of nonmonotonic reasoning.  And I have
to mention that I didn't know that until sometime in the middle 60's, and
I didn't have any idea of how to formalize it till the middle 70's.  So,
formalized nonmonotonic reasoning turned out to be quite a hard problem.
It actually turned out that in the late 70's several different approaches
to nonmonotonic reasoning were developed: in particular, Drew McDermott
and John Doyle developed their nonmonotonic logic, Raymond Reiter
developed his logic of defaults, and I developed a system that I call
circumscription.  Maybe I am getting ahead of myself on these slides.  I
will talk a little bit more about nonmonotonic reasoning a little bit
later.

	To put the problem a little more sharply, that is the logic part of
the problem, we can talk about the epistemology of common sense.
Epistemology, if you look in the dictionary, is the theory of knowledge
and in particular its limits and its methods of getting knowledge, and so
forth.  Naturally it's a term that we are attempting to swipe from the
philosophers, and I believe that we've sort of half got it out of their
hands and we'll have it completely in another ten years or so.  From the
scientific point of view, what we are trying to do is to split up the AI
problem into manageable parts, and in particular to separate the
epistemological problem from the heuristic problem of programming the
search for useful inferences and programming the pattern matching which
may be required in order to do useful inferences.  This is somewhat
controversial, but I guess on the whole opinion in AI has gone to the idea
of 'yes, this is a useful thing' and that it can often be done, although
various people will say: yes, but it can't always be done, and of course
you have to agree with that.
	The key thing that ought to be in this lecture is a catalog of the
areas of common sense knowledge.  Well, what is it that people know?  If
you look at this list that I have put down a little bit critically, you
will say 'well, I am not sure he has got it', and if you ask me, I will
say 'I am sure I don't'.  These are different subjects that have been
thought about.  Now, the one which has gotten the most action in AI has
been this question about the effects of actions and other events.
Formalizing this was done in the 60's, and it's also the core of the work
in AI on planning, about which Stan Rosenschein will talk this afternoon.

	Another area that I've just really been beginning to think about
formalizing is the relation between appearance and reality.  Even to
mention this involves some philosophical presumptions, and the question of
philosophical presumptions is an old one.  Bar-Hillel attacked my 1958
paper on the grounds that it had some, and he is certainly right.  There
are some philosophical presumptions; to axiomatize the relation between
appearance and reality is to assume that both exist, among other things.
The interesting thing about the relation between appearance and reality
is that what you want to do, of course, is go from appearances to reality.
However, when you make rules that do that, they turn out to have endless
exceptions; whereas the knowledge which turns out to be stable is the
knowledge that goes from reality to appearance, in the sense that if you
have an object here and the lighting is such and such and the observer is
there, then that is what it is going to look like.  But one needn't think
only about visual appearance; we could talk about a relation between
social appearance and social reality.  Well, if this guy doesn't dislike
me and he is close to the salt and I say 'pass the salt', well, he then
will do it.

	Certainly in AI there has been a lot of talk about relations
between parts of objects and the 'is a' hierarchies, which have gotten
quite a lot of play recently.  This is beginning to be understood
reasonably well.  My own intuition is that it has been given a little bit
more importance than it deserves.  Knowledge and belief have been
examined.  I put in there vision as a dynamic process simply because I was
going back to this appearance and reality.  Consider the talk that Alex
Pentland gave yesterday in one of the Prize Paper sessions.  He talked
about going from a general concept of an object to how that object would
look, and then matching that against the scene.  Certainly many people
have proposed to do vision in that way.  That all involves common sense.
We have communication as a process, and now I want to mention something
that... I won't mention that last one yet, because there is one I forgot
to put on the slide.  That is, I have been arguing a lot that
pattern-action rules are not an adequate form of common sense knowledge;
but it was a mistake to leave them out altogether.  So I want to say that
there is a large amount of common sense knowledge which does have the form
of 'if you see this, then do that', and so it was a mistake to leave it
out.  I just put it in at this point so I wouldn't forget it entirely.

	Now I want to mention natural kinds; this is a discovery of the
philosophers, and it's a moderately recent discovery.  I haven't quite
managed to figure out exactly which philosopher should be credited with
it.  I think Hilary Putnam certainly gets some share of the credit.  But
the idea is this: suppose we try to define some kind of an object.  An
example might be a lemon; and so somebody says 'it's a small yellow
fruit', and then you send this child to the store and you say 'by the way,
buy me a half a dozen blue lemons', and lo and behold, they happen to have
in the store a fruit that smells like a lemon, and is very like a lemon in
other ways, produces something that seems to be lemon juice, but it is
blue.  And I think we could accept that.  And, for example, some
geneticist going to work could develop a blue lemon.  So the question is
'how can you say that such a thing is a lemon?  We thought it was part of
the definition of a lemon that it be yellow'.  The philosophical answer of
natural kinds, and it represents really an observation about the world
rather than something done solely in philosophy, is that things really do
clump.  You could imagine trying to make a definition that would
distinguish a lemon from an orange, let us say, and it would be quite
arbitrary if there were a continuum of fruits between lemons and oranges.
But fortunately for us, there is not a continuum of fruits between lemons
and oranges, so almost any rule will work; and furthermore, there are all
kinds of variations on lemons which would still be close to the kind of a
lemon.  In particular, when we talk about a natural kind, our tendency to
make definitions is partly frustrated by the fact that a given individual
who can do perfectly well in distinguishing lemons from oranges doesn't
even know all the properties of lemons; in fact, maybe even science itself
doesn't know all the properties of lemons.  But we regard a lemon as a
kind of natural kind.

	What does this say for AI?  What it says for AI is that our
programs shouldn't try to work entirely with definitions.  If they do
that, they will be very rigid.  A program should accept the fact that
there are such things as lemons, that it does not know all the properties
of lemons, and that there are more of them to be discovered.  In any case,
that does represent a discovery that the philosophers made which is useful
to us.  I seem to have covered the top of the slide in talking about
appearance and reality, and then mentioned some specific facts.  One of
them, relevant to this particular problem, is that dogs sometimes look for
food in trash cans, and we can jump to a conclusion that an overturned
trash can was overturned by a dog.  But it's also true that mischievous
children sometimes overturn trash cans, and fleeing burglars sometimes run
into trash cans, to mention an example that you probably didn't think of.

	As you can see, this is a little disjointed, and the reason why it
is disjointed is that one can discover lots of kinds of facts about common
sense, and that's all I really have been able to do, rather than come down
with a nice solid definition.

	The next one comes in, and this is a sort of interesting phenomenon
that I think gives some difficulties for AI.  Suppose we say that a
container is sterile if all the bacteria in it are dead.  Then we could
put that definition as a clause, or a couple of clauses, in a Prolog
program, and it would actually involve a 'not'; that is, the thing can be
done in a very straightforward way.  However, if you put it in a Prolog
program, then there are only one or two ways in which the Prolog program
could use the fact.  One is that if you want to know whether a container
is sterile, then you check all the bacteria to see if they are dead.  And
so it would easily generate a program that would do that, except that the
part of the program that found the bacteria and looked at them wouldn't
work, wouldn't be easy to write.  Or, if you want to sterilize a
container, then you can of course kill each of the bacteria.  But this
knowledge I want to characterize as theoretical, even though it is
entirely useful, because we don't sort of compile it directly; instead we
use the theory.  Namely, if we want to sterilize a container, then we say
bacteria can't stand heat, so we heat the whole container, and we say,
well, according to theory that ought to kill all the bacteria in it
without our having to find each and every one of them.  Or we poison it in
some way with some bactericide.  Similarly, if we want to test whether the
container is sterile, then we have this theory about how bacteria will
grow in a suitable medium, and we use that.  There are two points about
this.  One is that maybe you can haggle a little bit about whether that's
common sense, or whether we've gone beyond common sense in this.  I think
at the moment I favor trying to include this much theoretical knowledge in
common sense.  The other point is, you could ask, is this really the
difference between theoretical knowledge and, one might say, practically
immediately usable knowledge?  Certainly, one can say this: the notion
that a container is sterile if all the bacteria in it are dead is not
directly usable without some additional facts about what kills bacteria or
how bacteria behave.  And of course an important problem for AI, I think,
is going to be being able to use such knowledge.  To give an example that
is certainly well known to the programmers involved: in the development of
MYCIN, the well known expert system for recommending diagnoses and
treatments of bacterial infections of the blood, the theoretical knowledge
is not directly present.  The experts on bacterial diseases were induced
to try to translate, or to use, their theoretical knowledge to produce
practical rules of thumb, and my thinking is that won't always work.
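
	To make the point concrete, here is a minimal sketch of the
definitional reading in Prolog (my rendering, not anything from the
slides; the predicate names bacterium, in and alive are illustrative):

	  :- dynamic bacterium/1, in/2, alive/1.
	  % A given container C is sterile if no live bacterium is in it.
	  % \+ is Prolog's negation as failure, the 'not' mentioned above.
	  sterile(C) :- \+ ( bacterium(B), in(B, C), alive(B) ).

	Used as a test, this clause commits us to enumerating the bacteria,
which is exactly the impractical part; and the theoretical route, heating
the container, does not follow from the clause at all.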

	Now I want to return to one of the topics of common sense on which
perhaps the most work has been done, and this has to do with the effects
of actions in achieving goals.  In 1969 Pat Hayes and I wrote a paper
called "Some Philosophical Problems from the Standpoint of Artificial
Intelligence" in which we proposed this result formalism.  So we introduce
a notion of situations, denoted by s in these formulas, and we have an
event e, and then s' = result(e, s) is the new situation that results.
What we want to have are a lot of rules which describe, for particular
kinds of situations and particular kinds of events, some properties of the
new situation that will result.  There is another idea which is involved
in this situation calculus which I'd like to recommend to your attention,
and that is that we regard a situation as not being completely
describable; so as for the situation in this room today, no human will
ever be able to say exactly what it is; in fact, we can imagine that it
contains an infinite amount of information.  All that you can do is say
things about it.  It turns out that it is a key aspect of useful
formalisms that they refer both to finite objects that can be completely
described and to objects that cannot be completely described.  In the
situation calculus formalism the situations are infinite objects and the
events are finite; that is, putting one block on top of another one is
taken as finite.  However, it seems that one might consider a formalism
that made the events also be infinite objects, so that we have an event,
for example, which is putting this paper clip on the podium, which has the
property that it is an event of putting the paper clip on the podium but
has all sorts of additional properties: that it was done clumsily, that it
took half a second, and so forth.  So we can imagine events as having
infinite collections of properties as well.  When this situation calculus
was proposed as a means of describing the effects of events, certain
problems arose.  One of them is called the frame problem, and it was how
to avoid specifying everything that doesn't change.  If you, for example,
imagine moving blocks and painting them, then you rapidly discover that
you seem to need some axioms that say that moving something doesn't change
its color and painting it doesn't change its position.  Otherwise the new
situation that results from an action is insufficiently described for you
to determine what happens next.  Humans don't seem to have a lot of
difficulty with that kind of thing, and so it was a question of how to
adapt our mathematical logical formalisms so that they wouldn't either,
and that took a while.  Another problem that came in was the qualification
problem: how to avoid endless qualifications in axioms.  This is a
problem, not under that name, that Marvin Minsky has emphasized in
discussing the need to qualify the axiom that birds can fly.  So he says,
what about these exceptions of dead birds and penguins and birds that have
had their feet encased in concrete, and other examples of that kind?  The
point about that last example is that you might imagine that your common
sense database would include the facts about penguins, and you could
imagine saying: a bird can fly if it is not a penguin and it is not an
ostrich and so forth.  But you can't imagine that you could catch up with
Minsky's ability to invent exotic reasons why a bird might not fly.  So
one would need formalisms that allow that.  There are two approaches.  One
is to get discouraged about logic, and the other one is to modify it, and
naturally I have been one who has been interested in formalizing
nonmonotonic reasoning.
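
	To fix notation (my transcription of the formalism just described,
not formulas from the slides), the basic schema and the two kinds of
axioms at issue look like this in the blocks world:

	  $s' = result(e, s)$
	  $location(x, result(move(x, l), s)) = l$          (an effect axiom)
	  $color(x, result(move(x, l), s)) = color(x, s)$   (a frame axiom)

	The frame problem is that, written this way, one such frame axiom
seems to be needed for every action and every property it does not change.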

	I mentioned formalized nonmonotonic reasoning, and of course I
should say what "monotonic" means in this connection.  Reasoning is
monotonic if, whenever you infer a sentence p from a set A of premises,
then p is also inferred from any larger set of premises that includes A.
Logical reasoning is monotonic, and it is monotonic for rather fundamental
reasons that don't depend on the particular logical system.  For example,
if you take the logical notion of a proof as being a sequence of
sentences, each of which is either an axiom or premise of some kind or
follows from some preceding sentences by an allowed rule of inference,
then without saying any more than that you are already committed to a
monotonic system.  Common sense reasoning includes nonmonotonic steps.
One example is that if you hear that I have a car here, you might ask me
for a ride, but then when you hear that my car is out of gas you might
decide, well, it's inappropriate; but then you see me with this gas can
approaching the car, and you decide, well, I'll ask him for a ride after
all.  So when you add a new fact, like the car being out of gas, you can
lose a conclusion that you previously drew.
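
	Stated symbolically (my summary, not a slide), monotonicity is the
property

	  $A \vdash p \;\mathrm{and}\; A \subseteq B \;\Longrightarrow\; B \vdash p,$

and the car example is precisely a failure of this: enlarging the premises
with "the car is out of gas" withdraws the conclusion that a ride is
available.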
As I mentioned, there are these three approaches to nonmonotonic reasoning
in the special issue of Artificial Intelligence in April 1980, and all
three of these approaches have been subsequently developed.  One can ask
what's the difference between formal reasoning systems that just include
defaults and formal nonmonotonic reasoning; the goal of the latter, or at
least the goal of the circumscription that I am pursuing, is to achieve a
considerably greater degree of generality, so that we can handle all of
Minsky's problems.  Let me mention one that comes up in the missionaries
and cannibals problem: you are given the usual missionaries and cannibals
problem, let us say in English, and you are discussing it with someone,
and he says: why don't they cross on a bridge?  And you say: it doesn't
say there is a bridge, and he says: it doesn't say there isn't a bridge.
You don't want to go around qualifying the problem.  What you want to do
is to appeal to some kind of principle that says that, at least in a
puzzle, the material objects present are to be just those whose presence
can be inferred from the statement of the problem, using of course not
merely common sense reasoning but general common sense knowledge.  The
missionaries and cannibals problem has all sorts of defaults in it.  It
mentions that there is a boat, but it doesn't say that the boat is
operable.  But it is a kind of default rule that a tool is usable for its
intended purpose unless there is information to the contrary.  In the
puzzle, things are rather straightforward because the nonmonotonic
inference rules can be taken as more than just conjectures; in real life,
the nonmonotonic rules are never conclusive; they are rules of conjecture
rather than rules of inference.  Anyway, one wants to say there that the
objects that are present are just those that have to be, or you want to
conjecture that.
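
	Negation as failure in a logic program gives one cheap version of
that default (a sketch of mine; the predicates tool, broken and usable are
illustrative, and this is far weaker than the general nonmonotonic
formalisms under discussion):

	  :- dynamic broken/1.   % so the absence of broken facts is just failure
	  % A tool is usable for its intended purpose unless there is
	  % information to the contrary.
	  usable(T) :- tool(T), \+ broken(T).
	  tool(boat).

	With no broken/1 fact recorded, the query usable(boat) succeeds;
adding the fact broken(boat) withdraws the conclusion, which is exactly
the nonmonotonic behavior.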

	I thought I ought to put in some formulas, just so that the
journalists won't think they understand everything, and this is one of the
current forms of circumscription.  We are talking about some predicate
vector P; you can think of it as a single predicate, for example
on(x, y), x is on y, in some particular case.  "On" is the predicate in
question; it can have one or more arguments.  And then E(P, x) is a
formula involving this predicate in which x is a free variable, and we
would like to make it false for a maximal set of values of x; if it were
compatible with the axiom, we would like it to be always false, but
normally, for the interesting cases anyway, the axiom A about P will be
such that we can't make E always false.  Now, we have this formula that
says that E is as false as possible.  And A'(P), that I have written down
there, is the definition of this new property of P.  In other words, we
say that P not only satisfies A but satisfies it in such a way that E is
as false as possible.  This is A'(P); so first of all P has to satisfy A,
and then we say "and" and we have a condition on all P' here.  And that of
course makes this a formula of second order logic.  I have been a fan of
first order logic for many years, but I've just got myself sort of pushed
off into second order logic for the benefit of the circumscription thing.
So what we are saying is that for any predicate P' that also satisfies the
axiom, and such that whenever E with P' in there is true then E with P is
true, that is only possible if E with P' is equivalent to E with P for all
x.  So that's a kind of extension of ordinary mathematical logic which
permits you to do some nonmonotonic reasoning.  The extension is the
decision that certain formulas are to be circumscribed, that is, are to be
minimized.  We talked about circumscribing E(P, x); that's what I call
formula circumscription.
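
	Written out (my reconstruction of the slide from the description
just given), the circumscription of the formula E(P, x) relative to the
axiom A(P) is the second order sentence

	  $A'(P) \equiv A(P) \land \forall P'.\,\{[A(P') \land \forall x\,(E(P', x) \supset E(P, x))] \supset \forall x\,(E(P', x) \equiv E(P, x))\}$

which says that no P' satisfying the axiom makes E true on a strictly
smaller set of x's than P does.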

	The point is that if you have made the decision that you are going
to minimize E(P, x), then what you will get depends on the axiom A, and
the axiom A is the set of facts that you take into account; and in
particular, if you take into account more facts, then you may have less
ability to minimize.  To give a particular example of this, I will show
some more formulas.  These have to do with moving and painting blocks, and
the idea in this particular case is to demonstrate a solution of the frame
problem.

	The first formula here is a statement that things don't change
their location without a reason.  The right hand half of this formula
says: the location of x in the result of the event e and s is the same as
it was before.  The left hand side of this implication is then a premise,
and it says that unless some aspect of x, the object, the event, and the
situation is abnormal, then this is true.  So this is an equivalent in
some sense, though more general, of a kind of STRIPS assumption that
things don't change their location unless there is a reason.

	The second one says that things don't change their color unless
there is a reason; for something to change its color it has to be abnormal
in a different aspect than in the case of changing its position.

	The next one is sort of a key thing: in the action of moving x to
l, x is abnormal in aspect 1.  So that says, with regard to this
particular action, that it has the effect of turning off this rule.

	Having given that much, let me say a little bit about the role of
circumscription.  The thing that we are going to circumscribe in order to
use these rules is abnormality: we are going to circumscribe the formula
ab z (I should have written it up there), but in any case, we are going to
make as few things abnormal as possible.  And what this says is: well,
there is no help for it; if you move something then, at least in this
aspect, it's abnormal: its location might change.  Then we have something
that says: indeed, unless there is something further abnormal, when you do
move something to a location l by performing this action, then it is going
to be located at l.

	The next two have to do with painting, and treat it in a way very
similar to the way moving was treated: this one turns off the axiom that
things don't change their colors, for the particular action of painting,
and this one says that unless something further is abnormal, the object
really does change its color suitably when you paint it.

	Then there are a few little facts here which say that things might
be abnormal in aspect 3: in this particular blocks world, if the top of
the object we want to move is not clear, or if the location where we want
to move it is not clear, or if the object is too heavy, or if we are
contemplating moving the object to its own top, then it is abnormal in
aspect 3, and therefore you cannot conclude that it will move there.
	And then there is stuff about the definition of clear that doesn't
have any special abnormality involved in it, and then finally we have
something that says that unless we know something is a trivial object,
like a speck of dust (that's what I mean by trivial in this case), it is
assumed to be not trivial.
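
	For reference, here is one plausible rendering of the axioms just
described (reconstructed from the prose above; the aspect numbering and
predicate names are my guesses at what was on the slides):

	  $\neg ab\,aspect1(x, e, s) \supset location(x, result(e, s)) = location(x, s)$
	  $\neg ab\,aspect2(x, e, s) \supset color(x, result(e, s)) = color(x, s)$
	  $ab\,aspect1(x, move(x, l), s)$
	  $\neg ab\,aspect3(x, l, s) \supset location(x, result(move(x, l), s)) = l$
	  $ab\,aspect2(x, paint(x, c), s)$
	  $\neg ab\,aspect4(x, c, s) \supset color(x, result(paint(x, c), s)) = c$
	  $[\neg clear(top(x), s) \lor \neg clear(l, s) \lor tooheavy(x) \lor l = top(x)] \supset ab\,aspect3(x, l, s)$
	  $clear(l, s) \equiv \forall y.\,(\neg trivial(y) \supset location(y, s) \neq l)$
	  $\neg ab\,aspect5(y) \supset \neg trivial(y)$

	Circumscribing ab then makes as few of these abnormality assertions
true as the axioms allow, which is what licenses the default conclusions.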


	So this represents at least one approach to extending the use of
first order logic in order to handle these things that it didn't handle
before, and thereby trying to say 'well, this is an approach toward
devising suitable axioms that one could hope to put in a common sense
database about moving objects'.  My opinion is that we are not there yet
by a fair margin.  These are more general than things that have been done
before, but still not general enough.

	I have put in the solution to this problem at this point.  I really
should have given a little propaganda before I put up the slide.  Let me
mention a little bit that there remain quite a few problems before we can
put stuff in the common sense database that would enable a computer
program to solve this problem, or even, short of that, to accept a
solution to it and presumably to reject all other solutions.  Let's talk
about rejection first.

	So you say these dogs are knocking over the trash can; why not
chase away a dog when it shows signs of knocking over a trash can?  And of
course the answer is 'that's fine if you happen to be there, but you can't
afford to stay around your trash cans all the time waiting to chase away
dogs'.

	Now, this little bit of common sense reasoning does require some
kind of formalization; what you really want, first of all, is to have
facts in your common sense database that refer to repeated events, facts
that enable you to say that you would really have to stay out there all
the time, and so forth and so on.  Then, a lot of people have an emotional
reaction to this problem; that is, they think about attacking the dog in
some way, or attacking their neighbors in some way if it is their
neighbors' dog.  I was never really sure whether this was the neighbors'
dog or whether it was perhaps my dog.  One TV producer of my acquaintance
even said: well, what I did was put lye in the trash cans.  I regarded
this solution as morally unacceptable, and it probably wouldn't work
anyway.

	So, the eventual solution turned out to be to go down to the
hardware store and buy a couple of hooks and hang the trash cans on the
hooks; and now we need two things to say.  One is, we'd like an argument
that the solution will work, but of course it is also relevant that I
tried it and it did work.

	Now, let me go through a few more aspects of common sense
reasoning.  This one has to do with human behavior, and what I want to
illustrate here are two problems of common sense.  The principle of
rationality is one that Allen Newell mentioned in his AAAI Presidential
Address; he gave the first one, and I don't know whether he would agree to
be elected president again to give a second one.  So the principle of
rationality is that an agent will do what it believes will achieve its
goals, and this was discussed in the following respects: we would like to
be able to ascribe beliefs and goals to machines; at least we have
discussed this, that is, Newell discussed it, and I have discussed it in
various places.  Roughly speaking, the reason why we ascribe beliefs and
goals to each other, to other human beings, is that doing so accounts for
a substantial part of human behavior, and we hope that at some time the
behavior of certain kinds of programs can be accounted for by the
ascription of beliefs and goals.  This can get elaborated in various ways:
well, maybe it won't do it because some obstacles may arise, so we can
elaborate it by saying that it intends to do what it believes will achieve
its goals.  Or else we can contract it: it will do what will achieve its
goals, or it will even simply achieve its goals.  Let me propagandize a
little bit for some of the contractions as being in some sense present in
the database.

	Suppose I say: well, that's all on this slide; then you imagine
I'll push the button for the next slide; you use 'it will do what will
achieve its goals'.  You don't do this by reasoning: McCarthy intends to
do what he believes will achieve his goals; he wants to show the audience
the next slide, and he believes that pushing the button will produce the
next slide, and therefore he intends to push the button.  You say: he
wants the next slide, he'll push the button.  So this represents a kind of
nonmonotonic contraction.  It is nonmonotonic because there might after
all be some exceptions.  You can imagine that I am suddenly struck by some
mental effect and have forgotten that pushing the button will change the
slide.  Now, the point is that we need contractions and elaborations, and
common sense reasoning manages to go back and forth among these things.
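
	As a crude sketch of the two forms (mine, and far too simple; all
the predicate names are illustrative):

	  wants(mccarthy, show(next_slide)).
	  achieves(push(button), show(next_slide)).
	  believes(mccarthy, achieves(push(button), show(next_slide))).

	  % Elaborated: action is mediated by belief.
	  does_elaborated(Agent, Act) :-
	      wants(Agent, Goal),
	      believes(Agent, achieves(Act, Goal)).

	  % Contracted: the belief step is taken for granted, nonmonotonically.
	  does_contracted(Agent, Act) :-
	      wants(Agent, Goal),
	      achieves(Act, Goal).

	The contracted clause is the one you actually use; the elaborated
one is kept in reserve for when exceptions force the belief step back in.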

	That's just another example of the same kind of thing; having
probably explained the other one fairly well, I don't need this one.

	There are lots of open questions about common sense and so I want to 
mention a few to terminate this lecture.  

	The biggest one, which has been hanging around for a long time in
terms of formalization, is how to handle concurrent events.  While I am
doing this, something else is occurring, and there doesn't really exist
any good axiomatic theory of this.  Curiously enough, there exists of
course a theory of concurrent computer programs, but it doesn't seem to be
very relevant to this AI problem.  It's in large measure concerned with
ways of achieving synchronization, and that isn't something that we
ordinarily have to think about in saying that while I am giving this
lecture something else is happening somewhere else, the audience is
getting impatient or whatever.

	Another notion I want to mention has to do with approximate
theories.  Let me explain this one a bit.  We might regard the world as
deterministic in the sense that the present state determines all of the
future states, or determines them probabilistically, or determines them
quantum mechanically; it doesn't much matter exactly.  In terms of common
sense thinking, we use approximate theories that are very far from
deterministic; namely, the theory uses a model that has inputs from the
outside.  The example I want to give is the following: two ski instructors
are watching their student, and the student falls down, and one of them
says: he fell because he didn't bend his knees (you're supposed to do
that), and the other one says: no, that's not right, he fell because he
didn't put his weight on his downhill ski when he was turning (you're
supposed to do that too), and so then they look at a movie and one of them
agrees that the other one was right.  Now, the theory that they have of
skiing is a kind of stick-figure theory, and furthermore, it has a kind of
input from the outside; namely, they are not concerned with the psychology
of the student as to why he didn't bend his knees (at least not at that
level of the argument) or why he didn't put his weight on his downhill
ski; they believe that if you are skiing straight and you don't bend your
knees, you are likely to fall when you hit a bump, with similar
consequences for the other error.  Granularity is a slogan that Jerry
Hobbs is now using to discuss similar phenomena.

	The next two there, which facts can be used directly in
pattern-action rules and which facts can be used directly as fragments of
logic programs, are related to the example that I gave of the sterile
container.

	Another remark is that the same facts are often used in various ways.  We
don't know how to express heuristic information as facts.  We have a lot of 
problems with maintaining modularity.  

	This is my last slide.  I want to mention a couple of other things.
Quite some time ago, maybe in the 60's even, Hubert Dreyfus wrote a book
WHAT COMPUTERS CAN'T DO.  It was a sort of an attack on Artificial
Intelligence, and it had a sort of journalism part; the journalism part
was saying: look, so and so said this would be done by this date and it
wasn't, or so and so exaggerated the performance of this program.  But
then it also had the part saying: well, there are these things that
computers lack and must lack.  The argument for the "must lack" was not
given, but among the things that computers were said to lack was ambiguity
tolerance.  It really wasn't said what ambiguity tolerance was.  Here is
my example of ambiguity tolerance.

	Suppose a law is passed saying: it's a crime to attempt to bribe a
public official.  And for 20 years some people are tried for this: some
people are indicted, some people are convicted, and some people are
acquitted.  And after 20 years along comes some smart lawyer who says: you
have shown that my client offered this man $5,000 to see if he could get
out of his drunk driving conviction, but you never proved that my client
knew that the man was a commissioner of motor vehicles.  My client might
have thought he was just another lawyer, and the fact that he really was a
commissioner of motor vehicles is irrelevant.  So, does attempting to
bribe a public official essentially involve knowing that the individual
was a public official?  That's the first kind of question.  The second one
is: well, you offered evidence that my client offered this man $5,000 to
fix his drunk driving conviction under the impression that he was a
commissioner of motor vehicles, but in fact we see that the governor had
never signed his commission properly, so that the man, although everybody
thought he was, was not in fact a public official.  Does attempting to
bribe a public official require that the person you are offering this
money to must actually be a public official, or is it sufficient that you
only think so?  As it happens, philosophers and lawyers are interested in
this kind of question and will haggle over it.  From the point of view of
AI, there is something much more important than that.  And that is that
twenty years went by before anybody found a problem.  Now, here we have a
situation where this concept was inherently ambiguous, and indeed both the
philosophers and the lawyers find it ambiguous, and the judges sort of
decide it in an ad hoc way.

	In a related problem, a conspiracy to assault a federal official,
the courts found against the defendants on both grounds.  If you hit this
federal official, you were assaulting a federal official even if you
didn't know; and if you hit somebody else thinking he was a federal
official, that's still the same crime.

	What is involved here from the AI point of view is this: it's clear
that AI would be in real trouble if, in order to get common sense, we had
to solve all the problems of ambiguity in advance.  So we need a formalism
using some kind of nonmonotonic reasoning that lets us say: this concept
may be ambiguous, but it isn't ambiguous in this particular case unless
there is some reason to believe it is.  I should mention that my current
cure-all for all the problems of AI is nonmonotonic reasoning, if you
haven't guessed that already.  So ambiguity tolerance can be treated in
this way.

	A further problem which is sort of related is what I have come to
call elaboration tolerance, and that seems to be this: we have very simple
rules to deal with situations, but we can elaborate them in some arbitrary
way.  To take the most simple elaboration: after I get done here this
afternoon, perhaps I will drive my car back to the hotel; but then it
turns out that we have to elaborate that, namely, the car is in fact
(hopefully not entirely) almost out of gas, and therefore I want to go by
some gas station.  But you can still think of some further elaboration
that might be required of the simple action, like: if there is a ticket on
the car, then I might have to do something about the ticket, and so forth
and so on.

	Let me summarize by emphasizing the apology I made at the
beginning, because now you will probably be more prepared than you were at
the beginning to believe that an apology is called for; which is to say, I
feel that I have not been able to answer, except in the evasive way of my
first slide, the general question of what is common sense, but I have been
able to say various things about common sense, some of which are, I hope,
relevant to the construction of Artificial Intelligence systems.